Full domain chain snap sync. #3115

Closed · wants to merge 31 commits

Conversation

Member

@shamil-gadelshin shamil-gadelshin commented Oct 10, 2024

This PR introduces snap sync for the domain chain (#3026).

It supersedes the following PRs:

Initial commit list description

  • commit 1 (64e54ef Add peer consensus for the last confirmed domain block receipt.) adds verification of the last confirmed domain block receipt acquired from remote peers
  • commit 2 (40758d0 Introduce snap sync orchestrator.) adds a process orchestrator (it blocks/resumes synchronization of the consensus and domain chains, enables consensus chain block processing on the domain chain side, blocks/resumes bundle production, etc.); a rough interface sketch follows this list
  • commit 3 (b7801f6 Introduce consensus sync params struct.) contains a new struct for domain snap sync parameters and minor comment changes
  • commit 4 (ce5f5f4 Add domain snap sync algorithm.) contains the new domain snap-sync algorithm with all recent changes. It was introduced previously in: Add domain state block verification for domain snap sync. #3053
  • commit 5 (7123fd6c Update MMR-sync) contains updates to the MMR-sync sub-algorithm: MMR data verification and removal of the dependency on FINALIZATION_DEPTH_IN_SEGMENTS. These changes were introduced previously in: Add MMR data verification. #3037 and Update MMR sync. #3104
  • commit 6 (2654ca77 Update consensus snap-sync) updates the consensus snap sync algorithm by introducing a blocking call for the consensus snap sync target block, with a default non-blocking implementation for the standalone consensus chain snap sync
  • commit 7 (638928aa Modify domain networking stack.) copies a method from the Substrate SDK (build_network) and modifies it to export block_downloader and network_request_handle for use in the domain snap sync workflow
  • commit 8 (d2e41e47 Update domain service (domain snap-sync integration)) updates the domain-service crate and integrates the domain snap sync algorithm; it also integrates the previous networking changes and disables genesis state creation when domain snap sync is used
  • commit 9 (ed974a64 Update domain operator (domain snap-sync integration)) updates the domain-operator crate and integrates domain snap sync: it starts a separate task for domain snap sync within the start_worker function and modifies consensus chain block import by pausing until the consensus sync succeeds and caching incoming block numbers for future processing
  • commit 10 (9f24f98e Update relayer (domain snap-sync integration)) modifies domain-client-message-relayer by pausing message relaying until domain snap sync is ready
  • commit 11 (87376aa4 Update subspace-service (domain snap-sync integration)) integrates domain snap sync into subspace-service and passes new parameters down the stack
  • commit 12 (1090eebf Update subspace-node (domain snap-sync integration)) integrates domain snap sync into subspace-node, creates high-level integration parameters between the consensus and domain chains, and removes obsolete CLI parameter checks
  • commit 13 (378eb41f Update test dependencies (domain snap-sync integration)) updates related dependencies in the subspace-malicious-node and domain-test-service crates
  • commit 14 (785a1ddc Update Cargo.lock) reflects all changes in Cargo.lock
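
For orientation, a rough sketch of the orchestrator interface described in commit 2 might look like the following. Only domain_snap_sync_finished and mark_domain_snap_sync_finished appear in the diff excerpts later in this PR; the constructor, the unblock_consensus_snap_sync method, and the use of tokio::sync::Notify are illustrative assumptions rather than the actual API.

use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use tokio::sync::Notify;

/// Illustrative sketch only: coordinates the consensus and domain snap sync phases.
pub struct SnapSyncOrchestrator {
    /// Set once the domain chain snap sync completes.
    domain_snap_sync_finished: AtomicBool,
    /// Used to unblock the consensus chain snap sync (assumed mechanism).
    consensus_snap_sync_unblocked: Notify,
}

impl SnapSyncOrchestrator {
    pub fn new() -> Arc<Self> {
        Arc::new(Self {
            domain_snap_sync_finished: AtomicBool::new(false),
            consensus_snap_sync_unblocked: Notify::new(),
        })
    }

    /// Allow the consensus chain snap sync to proceed (assumed method).
    pub fn unblock_consensus_snap_sync(&self) {
        self.consensus_snap_sync_unblocked.notify_waiters();
    }

    /// Returns true once the domain chain snap sync has completed.
    pub fn domain_snap_sync_finished(&self) -> bool {
        self.domain_snap_sync_finished.load(Ordering::Acquire)
    }

    /// Signal that the domain chain snap sync has completed.
    pub fn mark_domain_snap_sync_finished(&self) {
        self.domain_snap_sync_finished.store(true, Ordering::Release);
    }
}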

Further tasks and optimizations

Code contributor checklist:

Contributor

@teor2345 teor2345 left a comment

Looks good to me, but I don't have enough context to approve.

Cargo.lock (resolved)
domains/client/domain-operator/src/snap_sync.rs (resolved)
domains/client/domain-operator/Cargo.toml (resolved)
Member

@NingLin-P NingLin-P left a comment

Make sense in general with some nits/questions, will take another look

crates/subspace-service/src/domains.rs (resolved)
domains/client/domain-operator/src/domain_worker.rs (resolved)
Comment on lines 313 to 321
let cache_result = maybe_cache_block_info(
    block_info.clone(),
    target_block_number,
    snap_sync_orchestrator.clone(),
    &mut block_queue,
    &bundle_processor,
    span.clone(),
)
.await;
Member

According to our previous discussion, I thought we should disable the block import notification until the snap sync is finished so we don't need maybe_cache_block_info here.

Member Author

AFAIU, it will cause a deadlock: we need consensus chain block import to keep working, because we snap sync the consensus chain to the starting block of a segment and then need to import some blocks before we reach the consensus chain block related to the last confirmed domain block.

Member

Okay, I see. But even so, we don't need to cache and process every block import notification. The underlying domain-block-processor doesn't assume the notification stream emits a notification for every block; this is used to handle cases like chain re-orgs and node restarts, see:

let maybe_pending_consensus_blocks = self
    .domain_block_processor
    .pending_imported_consensus_blocks(consensus_block_hash, consensus_block_number)?;
if let Some(PendingConsensusBlocks {
    initial_parent,
    consensus_imports,
}) = maybe_pending_consensus_blocks
{
    tracing::trace!(
        ?initial_parent,
        ?consensus_imports,
        "Pending consensus blocks to process"
    );
    let mut domain_parent = initial_parent;
    for consensus_info in consensus_imports {

So we don't need to cache the notification but instead ignore it.

Member Author

The caching method ignores block numbers less than the target block. Did I understand your comment correctly?

Member

The caching method ignores block numbers less than the target block.

Actually, ignore all the block import notifications until the domain sync is finished; the code linked above will properly handle all the missed block notifications.

Comment on lines 210 to 218
if let Some(ref snap_sync_orchestrator) = snap_sync_orchestrator {
    if !snap_sync_orchestrator.domain_snap_sync_finished() {
        debug!(
            "Domain snap sync: skipping bundle production on slot {slot}"
        );

        continue
    }
}
Member

The slot notification will be skipped if the consensus chain is in major sync, so this check may be unnecessary.

Member Author

Absence of this check produces this error:

 ERROR Domain: domain_client_operator::domain_worker: Error at producing bundle. slot=Slot(119) err=Backend("Domain block hash for #529 not found")

Member

This (and the relayer error) is probably because the consensus chain sync oracle is used: when the consensus chain sync finishes, the domain chain sync is only just starting, so the domain block is still missing.

We can use a customized domain sync oracle here and in other places (e.g. the relayer), sketched below, which:

  • When full sync is used, returns the result of the consensus chain sync oracle
  • When domain snap sync is used, returns the result of snap_sync_orchestrator.domain_snap_sync_finished
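
A minimal sketch of what such an oracle could look like, assuming the standard sp_consensus::SyncOracle trait and the orchestrator's domain_snap_sync_finished flag; the struct name and fields are illustrative, not the PR's actual code:

use std::sync::Arc;
use sp_consensus::SyncOracle;

/// Illustrative sketch: a domain-side sync oracle that reports "major sync"
/// until the appropriate sync mode has finished.
struct DomainSyncOracle<CO> {
    /// Consensus chain sync oracle, used when full sync is configured.
    consensus_sync_oracle: CO,
    /// Set when domain snap sync is configured; `None` means full sync.
    snap_sync_orchestrator: Option<Arc<SnapSyncOrchestrator>>,
}

impl<CO: SyncOracle> SyncOracle for DomainSyncOracle<CO> {
    fn is_major_syncing(&self) -> bool {
        match &self.snap_sync_orchestrator {
            // Domain snap sync: report "syncing" until it signals completion.
            Some(orchestrator) => !orchestrator.domain_snap_sync_finished(),
            // Full sync: fall back to the consensus chain sync oracle.
            None => self.consensus_sync_oracle.is_major_syncing(),
        }
    }

    fn is_offline(&self) -> bool {
        self.consensus_sync_oracle.is_offline()
    }
}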

Member Author

Not sure I understood your comment. This error most likely arises when we try to access a block that isn't available, which makes sense because we don't have those blocks yet. The relayer error seems to be caused by missing runtime state, for a similar reason.

What sync oracle do you have in mind?

Member Author

Nazar described a similar solution offline, mentioning SubspaceSyncOracle.

Member

This error most likely arises when we try to access a block that isn't available, which makes sense because we don't have those blocks yet.

Yeah, what I propose is to skip processing the slot notification/block import notification/relayer until the domain sync is finished (at that point the required domain block and state should be available). Using snap_sync_orchestrator.domain_snap_sync_finished also works in some places, but IMO a sync oracle is a better abstraction.

Nazar described a similar solution offline mentioning SubspaceSyncOracle

Maybe the same thing, but we need a DomainSyncOracle which can't be used by the consensus chain.

domains/client/domain-operator/src/domain_worker.rs (resolved)
Comment on lines 180 to 188
if let Some(ref snap_sync_orchestrator) = snap_sync_orchestrator {
    if !snap_sync_orchestrator.domain_snap_sync_finished() {
        tracing::debug!(
            target: LOG_TARGET,
            "Domain snap sync: skipping message processing..."
        );
        continue;
    }
}
Member

The relayer will skip the block when the consensus chain is in major sync (we can change it to check whether the domain chain is in major sync if that is supported), so this check may be unnecessary.

Member Author

It produces the following error:
2024-10-14T14:17:25.818009Z ERROR Domain: message::relayer: Failed to submit messages from the chain Domain(DomainId(0)) at the block (815 err=FetchAssignedMessages

Member

@nazar-pc nazar-pc left a comment

I started reviewing, but I have Déjà vu. I have already seen this code and already said it was hard to understand because things are added in separate commits without an easy way to understand how they are related to each other. Instead of helping with review it actually makes things harder to review by unintentionally breaking links between directly related code changes.

Looking at the orchestrator I have the same comments as before: it is not immediately clear to me why it needs to exist if things like blocking sync can already be done without the orchestrator and the target block number could have been a one-shot channel. The fact that the orchestrator is dropped in a separate commit doesn't really help with understanding why it is necessary.

Would it be possible to rearrange it in some other way? I can try to review as is, but I basically can only look at the whole diff because individual commits are not helpful.

Comment on lines 117 to 118
const ATTEMPTS_NUMBER: u32 = 5;
const PEERS_THRESHOLD: usize = 5;
Member

Is this documented in the spec somewhere? We have much higher numbers for the consensus chain.

Also it is usually nice to have these at the top of the file for easier discovery.

Member Author

I moved the constants and increased their values for now; however, it seems we'll change either their values or their meaning because of the issue Ning found in the comments below.

crates/subspace-service/src/sync_from_dsn/snap_sync.rs (resolved)
@@ -43,13 +60,15 @@ pub(crate) async fn snap_sync<Block, AS, Client, PG>(
    sync_service: Arc<SyncingService<Block>>,
    network_service_handle: NetworkServiceHandle,
    erasure_coding: ErasureCoding,
    target_block_provider: Arc<dyn SnapSyncTargetBlockProvider>,
Member

What do you think about making this Option<BlockNumber> instead and decide how to orchestrate block number retrieval externally?

Specifically, with the other suggestions it'll simply be awaiting a oneshot channel before starting snap sync; we shouldn't need a whole provider just for a oneshot receiver.

Member Author

I don't think I follow you here.

The actual sync function works with target_block: Option<BlockNumber> as you suggested; the snap_sync wrapper handles the acquisition of the target block. We can move this acquisition up the call stack, but I consider that more confusing. Also, the target block provider trait encapsulates the default non-blocking behavior of the consensus chain sync and IMO is better than a one-shot receiver because of that.
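
For illustration, the two shapes under discussion might look roughly like this. The trait name SnapSyncTargetBlockProvider comes from the diff above; the default provider, the BlockNumber alias, and the helper function are assumptions for the sketch, not the PR's code.

use async_trait::async_trait;

pub type BlockNumber = u32;

/// Trait-based shape: the provider can encapsulate the default
/// non-blocking behaviour of a standalone consensus chain sync.
#[async_trait]
pub trait SnapSyncTargetBlockProvider: Send + Sync {
    async fn target_block(&self) -> Option<BlockNumber>;
}

/// Hypothetical default provider for a standalone consensus chain: no target block.
pub struct DefaultTargetBlockProvider;

#[async_trait]
impl SnapSyncTargetBlockProvider for DefaultTargetBlockProvider {
    async fn target_block(&self) -> Option<BlockNumber> {
        None
    }
}

/// Channel-based shape suggested in the review: await a oneshot
/// receiver before starting snap sync.
pub async fn wait_for_target_block(
    receiver: tokio::sync::oneshot::Receiver<BlockNumber>,
) -> Option<BlockNumber> {
    receiver.await.ok()
}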

- move ATTEMPTS_NUMBER and PEERS_THRESHOLD constants and increase their values
@shamil-gadelshin
Member Author

The last commit batch addresses the PR comments and offline discussions with the team:

  1. 9dd9c98d Remove unnecessary AuxStore field from ConsensusChainSyncParams: minor refactoring
  2. f00d0654 Change waiting for blocks to block notification acknowledgment: changes waiting for blocks from polling client.info() to handling block import notifications
  3. 3f16ef94 Modify SubspaceSyncOracle to add domain snap sync: modifies SubspaceSyncOracle to return major_syncing=true during domain snap sync
  4. cc6ca72e Refactor SnapSyncOrchestrator: simplifies the SnapSyncOrchestrator and removes unnecessary dependencies
  5. e9ff5011 Move domain block receipt protocol to domain network: prepares the code for the confirmed domain block execution receipt protocol changes by moving the protocol from the consensus chain network to the domain chain network
  6. b6c7c6d9 Update last confirmed domain block execution receipt protocol: updates the protocol by returning up to 10 receipts from each connected peer (addressing Ning's comment above); a rough sketch of the peer-consensus idea follows this list
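
As a rough illustration of the peer-consensus idea behind the receipt protocol: collect receipts from at least PEERS_THRESHOLD peers and accept the receipt observed most often. The function name, the counting rule, and the types below are assumptions for the sketch, not the implemented protocol.

use std::collections::HashMap;

/// Illustrative sketch: pick the most frequently seen receipt across peers,
/// requiring a minimum number of responding peers.
fn most_common_receipt<R: Eq + std::hash::Hash>(
    receipts_by_peer: Vec<Vec<R>>,
    peers_threshold: usize,
) -> Option<R> {
    if receipts_by_peer.len() < peers_threshold {
        return None; // not enough peers responded
    }

    let mut counts: HashMap<R, usize> = HashMap::new();
    for peer_receipts in receipts_by_peer {
        // Each peer may return up to 10 receipts.
        for receipt in peer_receipts {
            *counts.entry(receipt).or_insert(0) += 1;
        }
    }

    counts
        .into_iter()
        .max_by_key(|(_, count)| *count)
        .map(|(receipt, _)| receipt)
}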

@nazar-pc
Member

There are 23 comments in this PR; do you think you can squash the patches back into the older commits? I'm afraid I wouldn't be able to review and make sense of just the new commits, and having a shorter list of commits that does not include things that were removed later is very helpful.

@shamil-gadelshin
Member Author

There are 23 comments in this PR; do you think you can squash the patches back into the older commits? I'm afraid I wouldn't be able to review and make sense of just the new commits, and having a shorter list of commits that does not include things that were removed later is very helpful.

It's possible. I'll do that after finishing a conversation with Ning.

Contributor

@teor2345 teor2345 left a comment

Overall this looks good, I’ll try to follow up with a suggestion for the unsafe code later today.

There’s a lot of code here, is this PR targeting before or after the launch?

crates/sc-consensus-subspace/src/slot_worker.rs (resolved)
@@ -106,6 +107,11 @@ where
        // (default state), it also accounts for DSN sync
        (!self.force_authoring && self.inner.is_major_syncing())
            || self.pause_sync.load(Ordering::Acquire)
            || self
                .domain_sync_finished
                .as_ref()
Contributor

Nit: this is an unusually strong ordering, which might have a performance impact. Usually it is enough to use Acquire for loads and Release for stores.

Member Author

I'll set it to the proposed values. However, I don't think performance suffers here because of the access pattern (write once in total, query once per second). Please, correct me if I'm wrong.

Member

Besides the domain worker, SubspaceSyncOracle is also used in other places on the consensus chain, like the archiver, PoT worker, RPC, etc. If the domain sync is not finished, it will change the return value of is_major_syncing and some behaviors of these workers, so it would be better to have more eyes on this to see if it is okay.

Contributor

The SeqCst ordering has an impact on read and write performance, because it imposes a total order across all threads:
https://doc.rust-lang.org/std/sync/atomic/enum.Ordering.html#variant.SeqCst

crates/subspace-service/src/domains.rs (resolved)
/// Signal that domain snap sync finished.
pub fn mark_domain_snap_sync_finished(&self) {
    debug!("Signal that domain snap sync finished.");
    self.domain_snap_sync_finished.store(true, Ordering::SeqCst);
Contributor

Similar to my last comment, Release should be enough here:
https://doc.rust-lang.org/std/sync/atomic/enum.Ordering.html
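
For reference, the suggested pairing is a Release store on the writer side with an Acquire load on the reader side, which is sufficient for this publish-a-flag pattern. A generic sketch (not the PR code):

use std::sync::atomic::{AtomicBool, Ordering};

struct DomainSnapSyncFlag {
    finished: AtomicBool,
}

impl DomainSnapSyncFlag {
    /// Writer side: publish completion with Release so prior writes
    /// become visible to readers that observe `true`.
    fn mark_finished(&self) {
        self.finished.store(true, Ordering::Release);
    }

    /// Reader side: Acquire pairs with the Release store above.
    fn is_finished(&self) -> bool {
        self.finished.load(Ordering::Acquire)
    }
}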

Contributor

@teor2345 teor2345 Oct 28, 2024

Just un-resolving this because Ning wanted more people to review the code.

domains/client/domain-operator/src/operator.rs (4 resolved threads)
teor2345 and others added 3 commits October 28, 2024 20:23
Member Author

@shamil-gadelshin shamil-gadelshin left a comment

There’s a lot of code here, is this PR targeting before or after the launch?

We won't need domains at launch. So, it's a "post-mainnet" feature.

domains/client/domain-operator/src/operator.rs (2 resolved threads)
Comment on lines 263 to 295
// Wait for Subspace block importing notifications
let mut block_importing_notification_stream =
    Box::pin(params.operator_streams.block_importing_notification_stream);

while let Some((block_number, mut acknowledgement_sender)) =
    block_importing_notification_stream.next().await
{
    trace!(%block_number, "Acknowledged block import from consensus chain.");
    if acknowledgement_sender.send(()).await.is_err() {
        return Err(sp_consensus::Error::Other(
            format!("Can't acknowledge block import #{}", block_number).into(),
        ));
    }

    if block_number >= target_block_number.into() {
        break;
    }
}

// Drain Substrate block imported notifications
let mut imported_block_notification_stream =
    Box::pin(params.operator_streams.imported_block_notification_stream);

while let Some(import_notification) =
    imported_block_notification_stream.next().await
{
    let block_number = *import_notification.header.number();
    trace!(%block_number, "Block imported from consensus chain.");

    if block_number >= target_block_number.into() {
        break;
    }
}
Member

Since the imported_block_notification_stream is only polled after block_importing_notification_stream reaches the target block, at that point there may be too many items pending inside the imported_block_notification_stream, occupying a lot of memory. It may be better if we can poll these 2 streams concurrently, like:

let mut is_importing_reach_target = false;
let mut is_imported_reach_target = false;
loop {
    tokio::select! {
        Some(import_notification) = imported_block_notification_stream.next(), if !is_imported_reach_target => {
            ...
            is_imported_reach_target = block_number >= target_block_number;
        },
        Some((block_number, mut acknowledgement_sender)) = block_importing_notification_stream.next(), if !is_importing_reach_target => {
            ...
            is_importing_reach_target = block_number >= target_block_number;
        },
        else => {
            if is_importing_reach_target && is_imported_reach_target {
                break
            }
        }
    }
}

Member Author

I expect only a segment's worth of blocks in the worst case (25-35k). The block importing notification consists mostly of the header and several other fields (in rare cases a TreeRoute structure). Say it has a size of 1 KB on average; then we'd occupy about 30 MB of memory for 5 minutes. I prefer the current straightforward code because of its simplicity and readability. Let me know if you think it's a critical point and we'll reconsider the snippet.

Member

Not critical, but something that can be improved; the snippet is also more succinct IMHO. Non-blocker.

Contributor

The underlying stream type is created here:
https://github.com/autonomys/polkadot-sdk/blob/819a5818284a96a5a5bd65ce67e69bab860d4534/substrate/client/service/src/client/client.rs#L1953

It will warn at 100,000 items, so that seems okay for now, but maybe we could leave a TODO here?

Member Author

I agree that the suggested snippet is more succinct; however, IMO it's better to keep those streams separate, because the first one impacts other parts of the system and is a source of deadlocks across the two networks. It's easier to deal with it this way.

domains/client/domain-operator/src/operator.rs (resolved)
Comment on lines 331 to 352
"domain-operator-worker",
"domain-operator-worker-starter",
Member

@NingLin-P NingLin-P Oct 28, 2024

Sorry, I'm unable to see why we need both domain-operator-worker and domain-operator-worker-starter; could you give more detail about this?

Member Author

start_worker_task waits for the target block from the blocking channel to clean up the importing streams, and without running it in a new async task this causes a deadlock.

Member

Not really a deadlock. Please try to make the whole start_worker_task an async block, e.g. let start_worker_task = async move { .. }; then removing domain-operator-worker-starter should be fine.

Member Author

This snippet has two goals: 1) block the start_worker method until we get the target block, and 2) unblock the operator code to allow getting the target block from the domain operator network.

The first goal was initially achieved within start_worker using the orchestrator but later was refactored to be controlled from the outside. The second goal was introduced when we moved the target block acquisition from the consensus node network to the domain node network (after the last confirmed domain block execution receipt protocol change).

Member

Yeah, I understand what led to this change; my point is that we don't need domain-operator-worker-starter as a wrapper of domain-operator-worker. IIUC what we want is 3 sequential steps:

  1. Get the target_block_number
  2. Drain the streams
  3. start_worker

This can be done by wrapping them into one async block:

let start_worker_task = async move {
	let target_block_number = target_block_receiver.recv().await;
	
	while let Some(...) = stream.next().await {
		...
	}

	crate::domain_worker::start_worker(...).boxed()
};
spawn_essential.spawn_essential_blocking(
	"domain-operator-worker",
	None,
	Box::pin(start_worker_task),
);

This didn't work in the previous commit 60d45e9 because start_worker_task was not an async block (i.e. let start_worker_task = { ... }); it evaluated to a future of only the last step, start_worker, rather than a future of all 3 steps, so the first 2 steps would block the whole Operator::new() function.

Member Author

You were correct, it does work. The original solution was developed incrementally, but it is indeed possible to simplify it. Thanks!

Member

@NingLin-P NingLin-P left a comment

The new last confirmed receipt protocol LGTM.


Contributor

@teor2345 teor2345 left a comment

This all looks good to me.

I’m going to hold off approving it, because I understand we might want to make another Taurus release or two from the main branch.

Member

@NingLin-P NingLin-P left a comment

Rest LGTM

crates/subspace-service/src/mmr/sync.rs (resolved)
crates/subspace-service/src/sync_from_dsn/snap_sync.rs (resolved)
Comment on lines 101 to +112
 match snap_sync_fut.await {
-    Ok(()) => {
+    Ok(success) => {
         debug!("Snap sync finished successfully");
+
+        success
     }
     Err(error) => {
         error!(%error, "Snap sync failed");
+
+        false
     }
 }
 };
Member

The return value of this future is Result<bool> but it seems like Result<()> would be enough as the bool is not used and dropped right away.

Member Author

This code has already been changed in the main branch. I'll merge it after the last approval.

domains/client/domain-operator/src/operator.rs (2 resolved threads)
domains/client/domain-operator/src/snap_sync.rs (resolved)
Member

@NingLin-P NingLin-P left a comment

LGTM

@shamil-gadelshin
Member Author

Superseded by #3196 to enable a clean review, at Nazar's request.

Successfully merging this pull request may close these issues.

Implement domain chain snap-sync